80 research outputs found

    A hand gesture recognition technique for human-computer interaction

    We propose an approach to recognize trajectory-based dynamic hand gestures in real time for human-computer interaction (HCI). We also introduce a fast learning mechanism that does not require extensive training data to teach gestures to the system. We use a six-degrees-of-freedom position tracker to collect trajectory data and represent gestures as an ordered sequence of directional movements in 2D. In the learning phase, sample gesture data is filtered and processed to create gesture recognizers, which are essentially finite-state machine sequence recognizers. These recognizers achieve online gesture recognition without requiring gesture start and end positions to be specified. The results of the conducted user study show that the proposed method is very promising in terms of gesture detection and recognition performance (73% accuracy) in a stream of motion. Additionally, the user attitude survey indicates that the gestural interface is very useful and satisfactory. One of the novel aspects of the proposed approach is that it gives users the freedom to create gesture commands according to their preferences for selected tasks. Thus, the presented gesture recognition approach makes the HCI process more intuitive and user specific. © 2015 Elsevier Inc. All rights reserved.
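
    As an illustration of the kind of recognizer the abstract describes, here is a minimal sketch of a finite-state machine that consumes a stream of quantized 2D directional movements and fires whenever a gesture template is completed, without explicit start/end markers. The 8-direction quantization, template encoding, and restart rule are assumptions for illustration, not the authors' implementation.

```python
import math

def quantize_direction(dx, dy, num_bins=8):
    """Map a 2D displacement to one of num_bins directional symbols (0 = east, counterclockwise)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(round(angle / (2 * math.pi / num_bins))) % num_bins

class GestureFSM:
    """Finite-state machine that advances on matching directional symbols.

    Every symbol in the motion stream is fed to the machine; it reports a match
    whenever the full template has been traversed, so no explicit gesture
    start or end positions are needed.
    """
    def __init__(self, name, template):
        self.name = name
        self.template = template   # e.g. [0, 2] for "move right, then up"
        self.state = 0             # index of the next expected symbol

    def feed(self, symbol):
        if symbol == self.template[self.state]:
            self.state += 1
            if self.state == len(self.template):
                self.state = 0
                return True        # full template matched -> gesture detected
        elif symbol == self.template[0]:
            self.state = 1         # allow a restart on the template's first symbol
        else:
            self.state = 0
        return False

# Usage: feed successive tracker displacements into all recognizers.
recognizers = [GestureFSM("right-up", [0, 2])]
trajectory = [(1, 0), (1, 0), (0, 1), (0, 1)]
for dx, dy in trajectory:
    sym = quantize_direction(dx, dy)
    for r in recognizers:
        if r.feed(sym):
            print("detected:", r.name)
```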

    Extraction of 3D navigation space in virtual urban environments

    Urban scenes are one class of complex geometric environments in computer graphics. In order to develop navigation systems for urban sceneries, extracting and cellulizing the navigation space is one of the most commonly used techniques, as it provides a suitable structure for visibility computations. Surprisingly, not much work has been done on extracting the navigable area automatically. Urban models, except for those generated from building footprints, generally lack navigation space information. Because of this, it is hard to extract and discretize the navigable area for complex urban scenery. In this paper, we propose an algorithm for the extraction of navigation space for urban scenes in three dimensions (3D). Our navigation space extraction algorithm works for scenes where the buildings are highly complex. The building models may have pillars or holes through which it is possible to see. Moreover, for urban data acquired from different sources, which may contain errors, our approach provides a simple and efficient way of discretizing both the navigable space and the model itself. The extracted space can instantly be used for visibility calculations such as occlusion culling in 3D space. Furthermore, terrain height-field information can be extracted from the resultant structure, providing a way to implement urban navigation systems that include terrain.
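
    A rough sketch, under strong simplifying assumptions (buildings reduced to axis-aligned boxes, flat ground), of how navigable space might be discretized into a 2D cell grid. The paper's actual algorithm works on full 3D geometry with pillars and holes, which this toy version does not handle; all names and parameters below are illustrative.

```python
import numpy as np

def extract_navigation_grid(buildings, bounds, cell_size, agent_height=2.0):
    """Discretize the ground plane into cells and mark the ones free of geometry.

    buildings: list of axis-aligned boxes (xmin, xmax, ymin, ymax, zmin, zmax).
    A real urban model would need triangle-level tests; this is only a sketch.
    """
    xmin, xmax, ymin, ymax = bounds
    nx = int(np.ceil((xmax - xmin) / cell_size))
    ny = int(np.ceil((ymax - ymin) / cell_size))
    navigable = np.ones((nx, ny), dtype=bool)

    for i in range(nx):
        for j in range(ny):
            cx = xmin + (i + 0.5) * cell_size
            cy = ymin + (j + 0.5) * cell_size
            for bxmin, bxmax, bymin, bymax, bzmin, bzmax in buildings:
                # A cell is blocked if a box covers it within the agent's walking height.
                if bxmin <= cx <= bxmax and bymin <= cy <= bymax and bzmin < agent_height:
                    navigable[i, j] = False
                    break
    return navigable

# Example: one 10x10 m building inside a 50x50 m area, discretized into 1 m cells.
grid = extract_navigation_grid([(20, 30, 20, 30, 0, 15)], (0, 50, 0, 50), 1.0)
print("navigable cells:", int(grid.sum()), "of", grid.size)
```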

    Realistic rendering and animation of a multi-layered human body model

    A framework for realistic rendering of a multi-layered human body model is proposed in this paper. The human model is composed of three layers: skeleton, muscle, and skin. The skeleton layer, represented by a set of joints and bones, controls the animation of the human body using inverse kinematics. Muscles are represented with action lines that are defined by a set of control points. An action line applies the force produced by a muscle on the bones and on the skin mesh. The skin layer is modeled as a 3D mesh and deformed during animation by binding the skin layer to both the skeleton and muscle layers. The skin is deformed by a two-step algorithm according to the current state of the skeleton and muscle layers. Performance experiments show that it is possible to obtain real-time frame rates for a moderately complex human model containing approximately 33,000 triangles on the skin layer. © 2006 IEEE
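
    The abstract does not give the deformation equations, so the sketch below uses standard linear blend skinning for the skeleton-driven step and a precomputed per-vertex offset as a stand-in for the muscle (action-line) contribution. The function name, weights, and the offset term are illustrative assumptions, not the paper's method.

```python
import numpy as np

def skin_vertices(rest_vertices, bone_weights, bone_transforms, muscle_offsets=None):
    """Two-step skin deformation sketch.

    Step 1: deform each vertex by the weighted blend of its bones' transforms
            (standard linear blend skinning).
    Step 2: add a displacement driven by the muscle layer (here simply a
            precomputed per-vertex offset standing in for the bulge term).
    """
    n = len(rest_vertices)
    homogeneous = np.hstack([rest_vertices, np.ones((n, 1))])   # (n, 4) rest positions
    deformed = np.zeros((n, 3))
    for b, transform in enumerate(bone_transforms):             # each transform is a 4x4 matrix
        w = bone_weights[:, b:b + 1]                            # (n, 1) weights for bone b
        deformed += w * (homogeneous @ transform.T)[:, :3]
    if muscle_offsets is not None:
        deformed += muscle_offsets                              # muscle-driven displacement
    return deformed

# Tiny example: two vertices, two bones, second bone translated up along z.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
weights = np.array([[1.0, 0.0], [0.0, 1.0]])
bones = [np.eye(4), np.eye(4)]
bones[1][2, 3] = 1.0                                            # lift bone 1 by one unit along z
print(skin_vertices(rest, weights, bones))
```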

    Stereoscopic urban visualization based on graphics processor unit

    We propose a framework for the stereoscopic visualization of urban environments. The framework uses occlusion and view-frustum culling (VFC) and utilizes graphics hardware to speed up the rendering process. The occlusion culling is based on a slice-wise storage scheme that represents buildings using axis-aligned slices. This provides a fast and low-cost way to access the visible parts of the buildings. View-frustum culling for stereoscopic visualization is carried out once for both eyes by applying a transformation to the culling location. Rendering using graphics hardware is based on the slice-wise building representation. The representation facilitates fast access to data that are pushed into the graphics processing unit (GPU) buffers. We present algorithms to access this GPU data. The stereoscopic visualization uses off-axis projection, which we found more suitable for the case of urban visualization. The framework is tested on large urban models containing 7.8 million and 23 million polygons. Performance experiments show that real-time stereoscopic visualization can be achieved for large models. © 2008 Society of Photo-Optical Instrumentation Engineers.
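
    The off-axis stereo projection mentioned in the abstract is commonly built as a pair of asymmetric frusta with parallel view directions. The sketch below shows one such parameterization from eye separation and focal distance; the exact parameters used in the paper are not stated, so treat this as a generic illustration.

```python
import math

def off_axis_frusta(fov_y_deg, aspect, near, focal_dist, eye_sep):
    """Return (left, right, bottom, top) near-plane bounds for each eye.

    Off-axis stereo keeps both view directions parallel and shifts each
    frustum horizontally so the two images converge at the focal plane.
    """
    top = near * math.tan(math.radians(fov_y_deg) / 2.0)
    bottom = -top
    half_width = top * aspect
    # Horizontal frustum shift at the near plane for half the eye separation.
    shift = (eye_sep / 2.0) * near / focal_dist
    left_eye = (-half_width + shift, half_width + shift, bottom, top)
    right_eye = (-half_width - shift, half_width - shift, bottom, top)
    return left_eye, right_eye

# Example: 60-degree vertical FOV, 16:9 viewport, 6.5 cm eye separation.
left, right = off_axis_frusta(60.0, 16 / 9, near=0.1, focal_dist=5.0, eye_sep=0.065)
print("left-eye frustum:", left)
print("right-eye frustum:", right)
```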

    Free-form solid modeling using deformations

    One of the most important problems of available solid modeling systems is that the range of shapes they can generate is limited. It is not easy to model objects with free-form surfaces in a conventional solid modeling system. Such objects can be defined arbitrarily, but operations on them are then not transparent and complications occur. A method for achieving a free-form effect is to define regular objects or surfaces and then deform them. This keeps various properties of the model intact while achieving the required visual appearance. This paper discusses a number of geometric modeling techniques with deformations applied to them, in an attempt to combine the various approaches developed so far. © 1990
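
    As a concrete example of obtaining a free-form effect by deforming a regular shape, here is a sketch of a global twist deformation applied to a set of points. The choice of axis and the linear angle function are illustrative assumptions, not details from the paper.

```python
import math

def twist_z(points, twist_per_unit):
    """Twist points about the z-axis: the rotation angle grows linearly with z.

    This is the classic global deformation idea: start from a regular solid
    (e.g. a box or cylinder), then warp its points to obtain a free-form shape
    while keeping the underlying model structure intact.
    """
    deformed = []
    for x, y, z in points:
        theta = twist_per_unit * z
        c, s = math.cos(theta), math.sin(theta)
        deformed.append((c * x - s * y, s * x + c * y, z))
    return deformed

# A vertical edge of a box twists into a helix-like curve.
edge = [(1.0, 0.0, z * 0.1) for z in range(11)]
print(twist_z(edge, twist_per_unit=math.pi))   # half a turn over unit height
```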

    Physically-based simulation of hair strips in real-time

    In this paper, we present our implementation of physically-based simulation of hair strips. We use a mass-spring model, followed by a hybrid approach in which particle systems and clustering of hair strands are employed. All spring-related forces are implemented: gravity, repulsion from collisions (head and ground), absorption (ground only), friction (ground and air), and internal spring friction. Real-time performance is achieved for the physically-based simulation of hair strips, and promising results are obtained in terms of realistic hair behavior and hair rendering. Copyright UNION Agency - Science Press
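
    A minimal sketch of one hair strand as a pinned chain of particles connected by springs, with gravity, a crude ground collision, and velocity damping standing in for the friction and absorption terms. The constants and the semi-implicit Euler integration are assumptions, not the paper's implementation.

```python
import numpy as np

def simulate_strand(positions, dt=0.005, steps=200, k=500.0, damping=0.9,
                    rest_len=0.1, ground_y=0.0):
    """Mass-spring sketch of one hair strand as a chain of unit-mass particles.

    The first particle is pinned (attached to the scalp); the rest feel gravity,
    spring forces toward their neighbors, and a simple ground collision with
    velocity damping standing in for friction/absorption.
    """
    gravity = np.array([0.0, -9.81, 0.0])
    pos = positions.copy()
    vel = np.zeros_like(pos)
    for _ in range(steps):
        forces = np.tile(gravity, (len(pos), 1))
        for i in range(len(pos) - 1):
            d = pos[i + 1] - pos[i]
            length = np.linalg.norm(d) + 1e-9
            f = k * (length - rest_len) * d / length   # Hooke's law along the segment
            forces[i] += f
            forces[i + 1] -= f
        vel = damping * (vel + dt * forces)            # semi-implicit Euler with damping
        vel[0] = 0.0                                   # root stays attached to the head
        pos = pos + dt * vel
        hit = pos[:, 1] < ground_y                     # crude ground handling
        pos[hit, 1] = ground_y
        vel[hit] *= 0.5                                # lose energy on contact (friction-like)
    return pos

strand = np.array([[0.0, 1.0 + 0.1 * i, 0.0] for i in range(10)])
print(simulate_strand(strand)[-1])                     # tip position after the simulation
```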

    Animation of deformable models

    Although kinematic modelling methods are adequate for describing the shapes of static objects, they are insufficient when it comes to producing realistic animation. Physically based modelling remedies this problem by including forces, masses, strain energies and other physical quantities. The paper describes a system for the animation of deformable models. The system uses physically based modelling methods and approaches from elasticity theory for animating the models. Two different formulations, namely the primal formulation and the hybrid formulation, are implemented so that the user can select the one most suitable for an animation depending on the rigidity of the models. Collision of the models with impenetrable obstacles and constraining of the model points to fixed positions in space are implemented for use in the animations. © 1994
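
    To make the constraint and collision handling concrete, here is a sketch of a step that pins selected model points to fixed positions in space and projects penetrating points out of an impenetrable spherical obstacle. The obstacle shape and projection rule are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def enforce_constraints(positions, velocities, fixed_indices, fixed_targets,
                        sphere_center, sphere_radius):
    """Constraint/collision step sketch for a deformable model.

    Constrained points are snapped back to their prescribed positions, and any
    point that penetrates an impenetrable spherical obstacle is projected onto
    its surface with the inward velocity component removed.
    """
    # Point constraints: pin selected model points to fixed locations in space.
    positions[fixed_indices] = fixed_targets
    velocities[fixed_indices] = 0.0

    # Obstacle collision: project penetrating points out along the surface normal.
    offsets = positions - sphere_center
    dists = np.linalg.norm(offsets, axis=1)
    inside = dists < sphere_radius
    if np.any(inside):
        normals = offsets[inside] / dists[inside, None]
        positions[inside] = sphere_center + sphere_radius * normals
        vn = np.sum(velocities[inside] * normals, axis=1, keepdims=True)
        velocities[inside] -= np.minimum(vn, 0.0) * normals   # remove inward velocity only
    return positions, velocities

# Example: point 0 is pinned above the origin, point 1 is pushed out of the sphere.
pos = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0]])
vel = np.array([[0.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
pos, vel = enforce_constraints(pos, vel, [0], np.array([[0.0, 1.0, 0.0]]),
                               sphere_center=np.array([0.5, 0.0, 0.0]), sphere_radius=0.4)
print(pos)
print(vel)
```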

    A virtual garment design and simulation system

    In this paper, a 3D graphics environment for virtual garment design and simulation is presented. The proposed system enables the three-dimensional construction of a garment from its cloth panels, for which the underlying structure is a mass-spring model. The garment construction process is performed through automatic pattern generation, posterior correction, and seaming. Afterwards, it is possible to fit the garment on virtual mannequins as if in a real-life tailor's workshop. The system provides users with the flexibility to design their own garment patterns and make changes to the garment even after the model has been dressed. Furthermore, rendering alternatives for the visualization of knitted and woven fabric are presented. © 2007 IEEE
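
    A sketch of how a rectangular cloth panel's mass-spring topology and a simple seam between two panel edges might be set up. The structural/shear/bend layout and the stiff zero-rest-length seam springs are common textbook choices assumed here, not details taken from the paper.

```python
def build_panel_springs(width, height):
    """Build spring topology for a rectangular cloth panel on a mass-spring grid.

    Structural springs keep neighboring particles apart, shear springs resist
    in-plane skewing, and bend springs (two cells apart) resist folding.
    """
    def idx(i, j):
        return i * height + j

    springs = []   # entries are (particle_a, particle_b, kind)
    for i in range(width):
        for j in range(height):
            if i + 1 < width:  springs.append((idx(i, j), idx(i + 1, j), "structural"))
            if j + 1 < height: springs.append((idx(i, j), idx(i, j + 1), "structural"))
            if i + 1 < width and j + 1 < height:
                springs.append((idx(i, j), idx(i + 1, j + 1), "shear"))
                springs.append((idx(i + 1, j), idx(i, j + 1), "shear"))
            if i + 2 < width:  springs.append((idx(i, j), idx(i + 2, j), "bend"))
            if j + 2 < height: springs.append((idx(i, j), idx(i, j + 2), "bend"))
    return springs

def seam_panels(edge_a, edge_b):
    """Seaming sketch: connect corresponding particles of two panel edges with
    very stiff, zero-rest-length springs that pull the edges together."""
    return [(a, b, "seam") for a, b in zip(edge_a, edge_b)]

print(len(build_panel_springs(10, 10)), "springs for a 10x10 panel")
```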

    Message from the chairs

    [No abstract available]

    Sun position estimation and tracking for virtual object placement in time-lapse videos

    Realistic illumination of virtual objects placed in real videos is important for achieving visual coherence. We propose a novel approach for estimating illumination in time-lapse videos and seamlessly inserting virtual objects into these videos in a visually consistent way. The proposed approach works for both outdoor and indoor environments where the main light source is the Sun. We first modify an existing illumination estimation method that aims to obtain a sparse radiance map of the environment in order to estimate the initial Sun position. We then track the hard ground shadows in the time-lapse video using an energy-based pixel-wise method, which tracks the shadows by utilizing the energy values of the pixels that form them. We tested the method on various time-lapse videos recorded in outdoor and indoor environments and obtained successful results. © 2016, Springer-Verlag London
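
    Two small sketches related to the ideas in the abstract: recovering the Sun direction from a point and its ground shadow, and a toy per-pixel energy that favors dark pixels and temporal consistency for shadow tracking. The energy terms and weights are assumptions for illustration; the paper's actual energy formulation is not reproduced here.

```python
import numpy as np

def sun_direction(object_top, shadow_tip):
    """Given the 3D position of a point and the point where its shadow falls,
    the (distant) Sun lies along the ray from the shadow toward the point."""
    d = np.asarray(object_top, float) - np.asarray(shadow_tip, float)
    return d / np.linalg.norm(d)

def shadow_energy(gray_frame, shadow_mask_prev, darkness_weight=1.0, smooth_weight=0.5):
    """Toy per-pixel energy for tracking a hard shadow between frames:
    dark pixels and pixels that were shadowed in the previous frame get low
    energy, so thresholding the energy yields the new shadow mask."""
    gray = gray_frame.astype(float) / 255.0
    return darkness_weight * gray + smooth_weight * (1.0 - shadow_mask_prev.astype(float))

# Example: a 1 m vertical stick whose shadow tip lies 1.5 m away implies a Sun
# elevation of about 34 degrees.
d = sun_direction([0.0, 0.0, 1.0], [1.5, 0.0, 0.0])
print("sun direction:", d, "elevation (deg):", np.degrees(np.arcsin(d[2])))
```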